A common concern when a policy-maker draws causal inferences and makes decisions from observational data is that the measured covariates are insufficiently rich to account for all sources of confounding, i.e., the standard unconfoundedness assumption fails to hold. The recently proposed proximal causal inference framework shows that proxy variables can be leveraged to identify causal effects and thereby facilitate decision-making. Building on this line of work, we propose a novel optimal individualized treatment regime based on so-called outcome-inducing and treatment-inducing confounding bridges. We then show that the value function of this new optimal treatment regime is superior to that of existing ones in the literature. Theoretical guarantees, including identification, superiority, and an excess-value bound for the estimated regime, are established. Moreover, we demonstrate the proposed optimal regime via numerical experiments and a real-data application.
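For context on the two bridge functions named above, the following is a hedged sketch of the bridge equations as commonly stated in the proximal causal inference literature, not necessarily the exact conditions of this paper; the notation ($A$ treatment, $Y$ outcome, $X$ measured covariates, $Z$ and $W$ treatment- and outcome-inducing proxies) is our own shorthand.

```latex
% Outcome-inducing bridge h and treatment-inducing bridge q
% (as commonly defined in the proximal causal inference literature):
\begin{align}
  E[Y \mid Z, A, X] &= E[h(W, A, X) \mid Z, A, X], \\
  E[q(Z, A, X) \mid W, A, X] &= \frac{1}{f(A \mid W, X)}.
\end{align}
% Under suitable completeness conditions, either bridge identifies
% the counterfactual mean outcome:
\begin{align}
  E[Y(a)] = E[h(W, a, X)] = E\!\left[\mathbb{1}\{A = a\}\, q(Z, a, X)\, Y\right].
\end{align}
```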
A unidirectional imager permits image formation along only one direction, from an input field-of-view (FOV) A to an output FOV B, while blocking image formation in the reverse path. Here, we report the first demonstration of unidirectional imagers, presenting polarization-insensitive and broadband unidirectional imaging based on successive diffractive layers that are linear and isotropic. These diffractive layers are optimized using deep learning and consist of hundreds of thousands of diffractive phase features, which collectively modulate the incoming fields and project an intensity image of the input onto an output FOV, while blocking image formation in the reverse direction. After their deep learning-based training, the resulting diffractive layers are fabricated to form a unidirectional imager. As a reciprocal device, the diffractive unidirectional imager has asymmetric mode-processing capabilities in the forward and backward directions: the optical modes from B to A are selectively guided/scattered to miss the output FOV, whereas such modal losses are minimized in the forward direction, yielding an ideal imaging system between the input and output FOVs. Although trained using monochromatic illumination, the diffractive unidirectional imager maintains its functionality over a large spectral band and works under broadband illumination. We experimentally validated this unidirectional imager using terahertz radiation, with results matching our numerical simulations very well. Using the same deep learning-based design strategy, we also created a wavelength-selective unidirectional imager, where two unidirectional imaging operations, in opposite directions, are multiplexed through different illumination wavelengths. Diffractive unidirectional imaging using structured materials will have numerous applications in, e.g., security, defense, telecommunications, and privacy protection.
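To make the deep learning-based design concrete, below is a hedged, minimal sketch (our illustration, not the authors' code) of optimizing phase-only diffractive layers with a differentiable angular-spectrum propagator in PyTorch; the grid size, wavelength, layer count, spacings, and loss weights are illustrative assumptions. The forward pass is trained to image, while power transmitted in the reverse pass is penalized (in the full design, that power is steered outside the output FOV rather than absorbed).

```python
# Hedged sketch: training phase-only diffractive layers for unidirectional
# imaging. All physical parameters below are illustrative placeholders.
import torch

N, WL, DX, DZ = 128, 0.75e-3, 0.4e-3, 20e-3   # grid, wavelength (m), pitch, layer gap

def asm_propagate(u, dz):
    """Angular-spectrum free-space propagation of complex field u by dz."""
    fx = torch.fft.fftfreq(N, d=DX)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    kz2 = (1 / WL) ** 2 - FX ** 2 - FY ** 2
    H = torch.exp(2j * torch.pi * dz * torch.sqrt(kz2.clamp(min=0)))
    H = torch.where(kz2 > 0, H, torch.zeros_like(H))   # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(u) * H)

phases = [torch.zeros(N, N, requires_grad=True) for _ in range(5)]

def pass_through(u, layers):
    for p in layers:
        u = asm_propagate(u, DZ) * torch.exp(1j * p)   # phase-only modulation
    return asm_propagate(u, DZ)

opt = torch.optim.Adam(phases, lr=0.01)
for step in range(1000):
    img = torch.rand(N, N)                             # random training object
    u0 = torch.sqrt(img).to(torch.complex64)
    u_fwd = pass_through(u0, phases)                   # A -> B
    u_bwd = pass_through(u0, phases[::-1])             # B -> A (same reciprocal stack)
    loss = ((u_fwd.abs() ** 2 - img) ** 2).mean() \
           + 0.1 * (u_bwd.abs() ** 2).mean()           # image forward, block backward
    opt.zero_grad(); loss.backward(); opt.step()
```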
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
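Since the models are publicly released, they can be loaded with the Hugging Face transformers library; the snippet below is a hedged usage sketch using "bigscience/bloom-560m", one of the smaller released variants (the full 176B checkpoint is "bigscience/bloom").

```python
# Hedged usage sketch: loading a released BLOOM checkpoint for generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter open-access", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```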
We explore the capability of plain Vision Transformers (ViTs) for semantic segmentation and propose SegViT. Previous ViT-based segmentation networks usually learn a pixel-level representation from the output of the ViT. In contrast, we make use of a fundamental component of ViTs, the attention mechanism, to generate masks for semantic segmentation. Specifically, we propose the Attention-to-Mask (ATM) module, in which the similarity maps between a set of learnable class tokens and the spatial feature maps are transferred to segmentation masks. Experiments show that the proposed SegViT with the ATM module outperforms its counterparts using the plain ViT backbone on the ADE20K dataset and achieves new state-of-the-art performance on the COCO-Stuff-10K and PASCAL-Context datasets. Furthermore, to reduce the computational cost of the ViT backbone, we propose query-based down-sampling (QD) and query-based up-sampling (QU) to build a Shrunk structure. With the proposed Shrunk structure, the model can save up to $40\%$ of the computation while maintaining competitive performance.
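The following is a hedged PyTorch sketch of an ATM-style module as we read the abstract (not the authors' implementation): the cross-attention similarity map between learnable class tokens and spatial features is reused directly as a per-class segmentation mask.

```python
# Hedged sketch of an Attention-to-Mask (ATM) style module; shapes and the
# classification head are illustrative assumptions.
import torch
import torch.nn as nn

class ATM(nn.Module):
    def __init__(self, num_classes, dim):
        super().__init__()
        self.class_tokens = nn.Parameter(torch.randn(num_classes, dim))
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.cls_head = nn.Linear(dim, 1)        # per-token class-presence logit

    def forward(self, feats):                    # feats: (B, H*W, dim) ViT features
        B = feats.size(0)
        q = self.q(self.class_tokens).expand(B, -1, -1)    # (B, C, dim)
        k, v = self.k(feats), self.v(feats)                # (B, HW, dim)
        sim = q @ k.transpose(1, 2) / q.size(-1) ** 0.5    # (B, C, HW)
        masks = sim.sigmoid()                    # similarity map -> class masks
        tokens = sim.softmax(-1) @ v             # updated class tokens
        return masks, self.cls_head(tokens).squeeze(-1)    # masks + class logits
```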
Imaging exams such as chest radiography produce a small set of common findings and a much larger set of rare findings. While a well-trained radiologist can learn the visual appearance of rare conditions by studying a few representative examples, teaching a machine to learn from such a "long-tailed" distribution is much harder, as standard methods are easily biased toward the most common classes. In this paper, we present a comprehensive benchmark study of the long-tailed learning problem in the specific domain of thoracic disease classification from chest X-rays. We focus on learning from naturally distributed chest X-ray data, optimizing classification accuracy not only for the common "head" classes but also for the rare yet critical "tail" classes. To this end, we introduce a challenging new long-tailed chest X-ray benchmark to facilitate the development of long-tailed learning methods for medical image classification. The benchmark consists of two chest X-ray datasets for 19- and 20-way thoracic disease classification, containing classes with as many as 53,000 and as few as 7 labeled training images. We evaluate standard and state-of-the-art long-tailed learning methods on this new benchmark, analyze which aspects of these methods are most beneficial for long-tailed medical image classification, and summarize insights for future algorithm design. The datasets, trained models, and code are available at https://github.com/vita-group/longtailcxr.
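As a concrete example of the kind of long-tailed baseline such a benchmark evaluates (our illustration, not necessarily one of the paper's methods), a hedged sketch of class-balanced re-weighting in the style of Cui et al. (2019), where each class is weighted by the inverse of its "effective number" of samples:

```python
# Hedged sketch: class-balanced cross-entropy for a long-tailed label
# distribution. The per-class counts below are illustrative placeholders.
import torch

def class_balanced_weights(counts, beta=0.9999):
    """counts: per-class training-sample counts; weight = 1 / effective number."""
    counts = torch.tensor(counts, dtype=torch.float)
    effective = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / effective
    return w * len(counts) / w.sum()             # normalize to mean 1

weights = class_balanced_weights([53000, 12000, 800, 45, 7])
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
```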
Despite significant progress in recent years, depth estimation from a single monocular image remains challenging. First, it is nontrivial to train a metric depth prediction model that generalizes well to diverse scenes, mainly due to limited training data. Researchers have therefore built large-scale relative depth datasets, which are much easier to collect. However, because of the depth shift induced by training on relative depth data, existing relative depth estimation models usually fail to recover accurate 3D scene shape. We address this problem here and attempt to estimate scene shape by training on large-scale relative depth data and estimating the depth shift. To do so, we propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image, and then exploits 3D point cloud data to predict the depth shift and the camera's focal length, allowing us to recover 3D scene shape. As the two modules are trained separately, we do not require strictly paired training data. In addition, we propose an image-level normalized regression loss and a normal-based geometry loss to improve training with relative depth annotations. We test our depth model on nine unseen datasets and achieve state-of-the-art performance in zero-shot evaluation. Code is available at: https://git.io/depth
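Below is a hedged, simplified sketch of an image-level normalized regression loss as we read the abstract (not necessarily the authors' exact formulation): predicted and ground-truth depth maps are normalized per image before an L1 penalty, making the loss invariant to the unknown scale and shift of relative-depth labels.

```python
# Hedged sketch: per-image shift/scale normalization before regression,
# so relative-depth annotations can supervise training.
import torch

def ilnr_loss(pred, gt, eps=1e-6):
    """pred, gt: (B, H, W) depth maps."""
    def normalize(d):
        mu = d.mean(dim=(1, 2), keepdim=True)
        sigma = d.std(dim=(1, 2), keepdim=True)
        return (d - mu) / (sigma + eps)
    return (normalize(pred) - normalize(gt)).abs().mean()
```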
Detection and tracking of moving objects (DATMO) is an essential component of environment perception for autonomous driving. While 3D detectors using surround-view cameras are just flourishing, a growing trend is to learn queries in 3D space from 2D perspective-view feature maps with various transformer-based approaches. This paper proposes Sparse R-CNN 3D (SRCN3D), a novel two-stage fully convolutional pipeline for surround-view camera detection and tracking. SRCN3D adopts a cascade structure with twin-track updates of both a fixed number of proposal boxes and latent proposal features. Proposal boxes are projected to the perspective views to aggregate region-of-interest (RoI) local features. Based on these, proposal features are refined via a dynamic instance interaction head, which then generates classification results and offsets applied to the original bounding boxes. Compared with prior art, our sparse feature sampling module utilizes only local 2D features to adjust each corresponding 3D proposal box, leading to a fully sparse paradigm. Both proposal features and appearance features are adopted in the data association process of a multi-hypothesis 3D multi-object tracking approach. Extensive experiments on the nuScenes dataset demonstrate the effectiveness of our proposed SRCN3D detector and tracker. Code is available at https://github.com/synsin0/srcn3d.
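As a hedged illustration of the core projection step (our sketch, not the SRCN3D code), the snippet below projects a 3D proposal-box center into one perspective view and samples local 2D features around it; the camera intrinsics/extrinsics, window size, and feature-map shapes are assumed placeholders.

```python
# Hedged sketch: project a 3D proposal center to a camera view and sample a
# small RoI of local 2D features with grid_sample.
import torch
import torch.nn.functional as F

def sample_roi_features(feat, center_3d, K, T_cam, roi=7):
    """feat: (C, H, W) image feature map; center_3d: (3,) box center in ego frame;
    K: (3, 3) intrinsics; T_cam: (4, 4) ego-to-camera transform."""
    p = T_cam @ torch.cat([center_3d, torch.ones(1)])   # to camera frame
    uv = (K @ p[:3]) / p[2].clamp(min=1e-3)             # perspective projection
    H, W = feat.shape[1:]
    # small normalized sampling window around the projected center
    d = torch.linspace(-0.05, 0.05, roi)
    gy, gx = torch.meshgrid(d, d, indexing="ij")
    grid = torch.stack([uv[0] / W * 2 - 1 + gx,         # x in [-1, 1]
                        uv[1] / H * 2 - 1 + gy], -1)    # y in [-1, 1]
    return F.grid_sample(feat[None], grid[None], align_corners=False)  # (1, C, roi, roi)
```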
How to automatically and quickly mine effective information and make investment decisions has attracted growing attention from both academia and industry. The global pandemic has posed new challenges. This paper proposes a two-stage AlphaMLDigger that effectively discovers returns in a highly volatile market. In stage 1, a deep NLP model is proposed to map blog posts on Sina Microblog to market sentiment. In stage 2, the predicted market sentiment is combined with social network indicator features and stock market historical features to predict stock movements using different machine learning models and optimizers. The results show that our AlphaMLDigger achieves higher accuracy on the test set than previous works and is, to some extent, robust to the negative impact of COVID-19.
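A hedged sketch of the stage-2 idea as we read it (illustrative only, not the authors' pipeline): concatenate the stage-1 sentiment scores with social and market-history features, then fit an off-the-shelf classifier to predict stock movement. All feature names and data below are placeholders.

```python
# Hedged sketch: combining sentiment with social/market features for
# movement classification on synthetic placeholder data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sentiment = rng.random((500, 1))     # stage-1 output: blog sentiment scores
social = rng.random((500, 3))        # e.g., repost/comment/like indicators
history = rng.random((500, 5))       # e.g., lagged returns and volume
X = np.hstack([sentiment, social, history])
y = rng.integers(0, 2, 500)          # 1 = price up, 0 = down (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```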
There has been substantial research on graph drawing, but many existing methods focus only on optimizing specific aesthetic aspects of a graph layout. Given a graph, generating a good layout that satisfies certain human aesthetic preferences remains a challenging task, especially if such preferences cannot be expressed as a differentiable objective function. In this paper, we propose SmartGD, a student-teacher GAN-based graph drawing framework that learns to draw graphs just as humans learn to perform a task. The student network in SmartGD learns graph drawing by imitating good layout examples, while the teacher network in SmartGD is responsible for rating the quality of generated layouts. When concrete aesthetic criteria specifying what constitutes a good layout are lacking, the student network can learn from the good layout examples. On the other hand, if the quality of a layout can be assessed by quantitative criteria (even non-differentiable ones), the student network can use them as a concrete objective for optimizing the target aesthetics. To achieve this goal, we propose a novel GAN variant, the self-challenging GAN, to learn the optimal layout distribution with respect to any aesthetic criterion, regardless of whether the criterion is differentiable. The proposed graph drawing framework can not only draw graphs in a style similar to the good layout examples but also optimize graph layouts according to any given aesthetic criterion. Once the model is trained, arbitrary graphs can be visualized in the style of the example layouts or according to the chosen aesthetic criterion. Comprehensive experimental studies show that SmartGD outperforms 12 benchmark methods on commonly agreed-upon metrics.
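To ground the student-teacher framing, here is a hedged, minimal GAN loop (our loose illustration only; the paper's self-challenging GAN has additional machinery we do not reproduce) in which a generator proposes node coordinates for a fixed-size graph and a discriminator rates layouts, mirroring the student and teacher roles. Network sizes and the "good layout" data are placeholders.

```python
# Hedged sketch: student (generator) imitates good layouts; teacher
# (discriminator) rates them. Fixed graph size for simplicity.
import torch
import torch.nn as nn

N_NODES = 16                                     # illustrative fixed graph size
G = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, N_NODES * 2))
D = nn.Sequential(nn.Linear(N_NODES * 2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

good_layouts = torch.rand(256, N_NODES * 2)      # placeholder "good" examples

for step in range(1000):
    real = good_layouts[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, 32))
    # teacher: rate good examples above generated ones
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # student: produce layouts the teacher rates highly
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```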
Open World Object Detection (OWOD), simulating the truly dynamic world where knowledge grows continuously, attempts to detect both known and unknown classes and to incrementally learn the identified unknowns. We find that although previous OWOD work constructively proposed the OWOD definition, the experimental settings are unreasonable, with illogical benchmarks, confusing metric calculations, and improper methods. In this paper, we rethink the OWOD experimental setting and propose five fundamental benchmark principles to guide the construction of OWOD benchmarks. Moreover, we design two fair evaluation protocols specific to the OWOD problem, filling the gap in evaluation from the perspective of unknown classes. Furthermore, we introduce a novel and effective OWOD framework containing an auxiliary Proposal ADvisor (PAD) and a Class-specific Expelling Classifier (CEC). The non-parametric PAD helps the RPN identify accurate unknown proposals without supervision, while the CEC calibrates over-confident activation boundaries and filters out confusing predictions through a class-specific expelling function. Comprehensive experiments conducted on our fair benchmark demonstrate that our method outperforms other state-of-the-art object detection approaches in terms of both existing and our new metrics. Our benchmark and code are available at https://github.com/RE-OWOD/RE-OWOD.
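For intuition on the expelling idea, below is a hedged, loose sketch (our illustration, not the paper's exact CEC): per-class activation thresholds are estimated from known-class training logits, and test-time predictions whose activation falls below the calibrated boundary are expelled and re-labeled as unknown.

```python
# Hedged sketch: calibrate per-class activation thresholds, then expel
# low-activation predictions to the unknown class. Quantile level is an
# illustrative assumption.
import torch

UNKNOWN = -1

def fit_thresholds(logits, labels, num_classes, q=0.05):
    """Per-class lower quantile of the true-class activation on training data."""
    return torch.stack([
        torch.quantile(logits[labels == c, c], q) for c in range(num_classes)
    ])

def expel(logits, thresholds):
    """Predict argmax class, but expel to UNKNOWN below the class threshold."""
    pred = logits.argmax(dim=1)
    conf = logits.gather(1, pred[:, None]).squeeze(1)
    return torch.where(conf >= thresholds[pred], pred, torch.full_like(pred, UNKNOWN))
```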